- Europe > France (0.04)
- North America > United States (0.04)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.46)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- North America > United States > California (0.04)
- Information Technology > Data Science (0.46)
- Information Technology > Artificial Intelligence (0.46)
Convergence of Actor-Critic Methods with Multi-Layer Neural Networks
The early theory of actor-critic methods considered convergence using linear function approximators for the policy and value functions. Recent work has established convergence using neural network approximators with a single hidden layer. In this work we take the natural next step and establish convergence using deep neural networks with an arbitrary number of hidden layers, thus closing a gap between theory and practice. We show that actor-critic updates projected onto a ball around the initial condition converge to a neighborhood where the average of the squared gradients is O(1/m) + O(ϵ), with m the width of the neural network and ϵ the approximation quality of the best critic neural network over the projected set.
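The key algorithmic ingredient referenced above is projecting each update back onto a ball around the initial parameters. Below is a minimal sketch of such a projected update in Python; the l2 projection, the synthetic gradient, and all variable names are illustrative assumptions, not the paper's actual actor-critic algorithm.

```python
import numpy as np

def project_to_ball(theta, theta0, radius):
    """Project parameters back onto an l2 ball of the given radius around theta0."""
    diff = theta - theta0
    norm = np.linalg.norm(diff)
    if norm <= radius:
        return theta
    return theta0 + diff * (radius / norm)

# Hypothetical projected update: take a stochastic gradient step, then project
# back toward the initialization (a stand-in for the projected actor/critic step).
rng = np.random.default_rng(0)
theta0 = rng.normal(size=128)      # initial parameters
theta = theta0.copy()
radius, lr = 1.0, 1e-2
for _ in range(100):
    grad = rng.normal(size=128)    # stand-in for a stochastic policy/TD gradient
    theta = project_to_ball(theta - lr * grad, theta0, radius)
```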
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
4c4c937b67cc8d785cea1e42ccea185c-Supplemental.pdf
In our method and all the baselines except surrogate-based triage, we use the cross-entropy loss and implement SGD using the Adam optimizer [40], with the initial learning rate set by cross-validation independently for each method and level of triage b. In surrogate-based triage, we use the loss and optimization method used by the authors in their public implementation. Moreover, we use early stopping with the patience parameter ep = 10, i.e., we stop the training process if no reduction of the cross-entropy loss is observed on the validation set for ep consecutive epochs. This suggests that the humans are more accurate than the predictive model throughout the entire feature space. This suggests that the humans are less accurate than the predictive model in some regions of the feature space.
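For illustration, here is a minimal sketch of the described training setup, cross-entropy loss, Adam, and early stopping with patience ep = 10, assuming a PyTorch classifier; `model`, `train_loader`, `val_loader`, and the default learning rate are placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, lr=1e-3, patience=10, max_epochs=200):
    # Cross-entropy loss and Adam, as described above; lr would be chosen
    # by cross-validation in the actual experiments.
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_val, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        # Early stopping: track the validation cross-entropy loss.
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)
        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
    return model
```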
Supplementary Materials
A Proof of Theorem 2: Asymptotic Convergence of Robust Q-Learning
From [Borkar and Meyn, 2000], we know that the stochastic approximation (18) converges to the fixed point of T, i.e., Q. Finally, to show Theorem 3, we only need to show that each term in (56) is smaller than . In this section we develop the finite-time analysis of the robust TDC algorithm. We note that there are several recent works [Srikant and Ying, 2019, Xu and Liang, 2021, Kaledin et al., 2020] on the finite-time analysis of RL algorithms that do not need the projection. Specifically, the problem in [Srikant and Ying, 2019] is a one-time-scale linear stochastic approximation.
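As a toy illustration of the stochastic-approximation argument, the sketch below iterates Q_{t+1} = Q_t + α_t (T̂(Q_t) − Q_t) with Robbins-Monro step sizes and a synthetic contraction operator; it is not the paper's robust Q-learning or robust TDC update, only the generic fixed-point iteration that results of the Borkar and Meyn type apply to.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n = 0.9, 10
target = rng.normal(size=n)

def noisy_T(q):
    # Toy gamma-contraction whose (noiseless) fixed point is `target`,
    # plus zero-mean noise to mimic stochastic samples of the operator.
    return target + gamma * (q - target) + 0.1 * rng.normal(size=n)

q = np.zeros(n)
for t in range(1, 20001):
    alpha = 1.0 / t                   # step sizes: sum = inf, sum of squares < inf
    q += alpha * (noisy_T(q) - q)
# q is now close to `target`, the fixed point of the noiseless operator.
```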
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > France > Auvergne-Rhône-Alpes > Isère > Grenoble (0.04)
BountyBench: Dollar Impact of AI Agent Attackers and Defenders on Real-World Cybersecurity Systems
Zhang, Andy K., Ji, Joey, Menders, Celeste, Dulepet, Riya, Qin, Thomas, Wang, Ron Y., Wu, Junrong, Liao, Kyleen, Li, Jiliang, Hu, Jinghan, Hong, Sara, Demilew, Nardos, Murgai, Shivatmica, Tran, Jason, Kacheria, Nishka, Ho, Ethan, Liu, Denis, McLane, Lauren, Bruvik, Olivia, Han, Dai-Rong, Kim, Seungwoo, Vyas, Akhil, Chen, Cuiyuanxiu, Li, Ryan, Xu, Weiran, Ye, Jonathan Z., Choudhary, Prerit, Bhatia, Siddharth M., Sivashankar, Vikram, Bao, Yuxuan, Song, Dawn, Boneh, Dan, Ho, Daniel E., Liang, Percy
AI agents have the potential to significantly alter the cybersecurity landscape. Here, we introduce the first framework to capture offensive and defensive cyber-capabilities in evolving real-world systems. Instantiating this framework with BountyBench, we set up 25 systems with complex, real-world codebases. To capture the vulnerability lifecycle, we define three task types: Detect (detecting a new vulnerability), Exploit (exploiting a given vulnerability), and Patch (patching a given vulnerability). For Detect, we construct a new success indicator, which is general across vulnerability types and provides localized evaluation. We manually set up the environment for each system, including installing packages, setting up server(s), and hydrating database(s). We add 40 bug bounties, which are vulnerabilities with monetary awards from \$10 to \$30,485, covering 9 of the OWASP Top 10 Risks. To modulate task difficulty, we devise a new strategy based on information to guide detection, interpolating from identifying a zero day to exploiting a given vulnerability. We evaluate 10 agents: Claude Code, OpenAI Codex CLI with o3-high and o4-mini, and custom agents with o3-high, GPT-4.1, Gemini 2.5 Pro Preview, Claude 3.7 Sonnet Thinking, Qwen3 235B A22B, Llama 4 Maverick, and DeepSeek-R1. Given up to three attempts, the top-performing agents are Codex CLI: o3-high (12.5% on Detect, mapping to \$3,720; 90% on Patch, mapping to \$14,152), Custom Agent: Claude 3.7 Sonnet Thinking (67.5% on Exploit), and Codex CLI: o4-mini (90% on Patch, mapping to \$14,422). Codex CLI: o3-high, Codex CLI: o4-mini, and Claude Code are more capable at defense, achieving higher Patch scores of 90%, 90%, and 87.5%, compared to Exploit scores of 47.5%, 32.5%, and 57.5% respectively; while the custom agents are relatively balanced between offense and defense, achieving Exploit scores of 17.5-67.5% and Patch scores of 25-60%.
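Purely as a hypothetical illustration of the framework's structure (the three task types and dollar-weighted scoring), the sketch below shows one possible way a bounty task could be represented; the field names and the scoring helper are assumptions, not BountyBench's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class TaskType(Enum):
    DETECT = "detect"    # detect a new vulnerability
    EXPLOIT = "exploit"  # exploit a given vulnerability
    PATCH = "patch"      # patch a given vulnerability

@dataclass
class BountyTask:
    system: str          # one of the 25 real-world codebases
    task_type: TaskType
    bounty_usd: float    # monetary award ($10 to $30,485 in the dataset)
    owasp_category: str  # e.g. one of the OWASP Top 10 Risks

def dollar_impact(tasks, successes):
    """Sum the bounty value of the tasks an agent solved successfully."""
    return sum(t.bounty_usd for t, ok in zip(tasks, successes) if ok)
```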
- Research Report > New Finding (0.92)
- Research Report > Experimental Study (0.67)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.70)